    Material Classification of Magnetic Resonance Volume Data

    A major unsolved problem in computer graphics is that of making high-quality models. Traditionally, models have consisted of interactively or algorithmically described collections of graphics primitives such as polygons. The process of constructing these models is painstaking and often misses features and behavior that we wish to model. Models extracted from volume data collected from real, physical objects have the potential to show features and behavior that are difficult to capture using these traditional modeling methods. We use vector-valued magnetic resonance volume data in this thesis. The process of extracting models from such data involves four main steps: collecting the sampled volume data; preprocessing it to reduce artifacts from the collection process; classifying materials within the data; and creating either a rigid geometric model that is static, or a flexible, dynamic model that can be simulated. In this thesis we focus on the first three steps. We present guidelines and techniques for collecting and processing magnetic resonance data to meet the needs of the later steps. Our material classification and model extraction techniques work better when the data values for a given material are constant throughout the dataset, when data values for different materials are different, and when the dataset is free of aliasing artifacts and noise. We present a new material-classification method that operates on vector-valued volume data. The method produces a continuous probability function for each material over the volume of the dataset, and requires no expert interaction to teach it different material classes. It operates by fitting peaks in the histogram of a collected dataset with parameterized Gaussian bumps, and by using Bayes' law to calculate material probabilities, with each Gaussian bump representing one material. To illustrate the classification method, we apply it to real magnetic resonance data of a human head, a human hand, a banana, and a jade plant. From the classified data, we produce "computationally stained" slices that discriminate among materials better than do the original grey-scale versions. We also generate volume-rendered images of classified datasets clearly showing different anatomical features of various materials. Finally, we extract preliminary static and dynamic geometric models of different tissues.
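
    To make the histogram-fitting step concrete, the following minimal sketch fits one Gaussian bump per material to a dataset histogram and applies Bayes' law to turn a data value into per-material probabilities. The synthetic data, material means, and widths are illustrative assumptions, not values from the thesis.

        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian(v, amplitude, mean, sigma):
            return amplitude * np.exp(-0.5 * ((v - mean) / sigma) ** 2)

        def mixture(v, *params):
            # Sum of one parameterized Gaussian bump per material.
            total = np.zeros_like(v)
            for a, m, s in np.reshape(params, (-1, 3)):
                total += gaussian(v, a, m, s)
            return total

        # Synthetic scalar MR values for two materials (assumed parameters).
        rng = np.random.default_rng(0)
        data = np.concatenate([rng.normal(40, 5, 5000), rng.normal(100, 8, 5000)])
        counts, edges = np.histogram(data, bins=128)
        centers = 0.5 * (edges[:-1] + edges[1:])

        # Fit the two-bump mixture to the histogram; p0 holds rough initial guesses.
        p0 = [counts.max(), 40.0, 5.0, counts.max(), 100.0, 8.0]
        params, _ = curve_fit(mixture, centers, counts, p0=p0)
        bumps = np.reshape(params, (-1, 3))

        def material_probabilities(v):
            # Bayes' law: P(material_i | v) is proportional to bump_i(v).
            likelihoods = np.array([gaussian(v, a, m, s) for a, m, s in bumps])
            return likelihoods / likelihoods.sum(axis=0)

        print(material_probabilities(70.0))  # soft assignment where the bumps overlap

    Because each bump is a continuous function of the data value, the resulting material probabilities are continuous over the volume, which is what lets the "computationally stained" slices grade smoothly across material boundaries.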

    Classification of Material Mixtures in Volume Data for Visualization and Modeling

    Material classification is a key step in creating computer graphics models and images from volume data. We present a new algorithm for identifying the distribution of different material types in volumetric datasets such as those produced with Magnetic Resonance Imaging (MRI) or Computed Tomography (CT). The algorithm assumes that voxels can contain more than one material, e.g. both muscle and fat; we wish to compute the relative proportion of each material in the voxels. Other classification methods have utilized Gaussian probability density functions to model the distribution of values within a dataset. These Gaussian basis functions work well for voxels with unmixed materials, but do not work well where the materials are mixed together. We extend this approach by deriving non-Gaussian "mixture" basis functions. We treat a voxel as a volume, not as a single point. We use the distribution of values within each voxel-sized volume to identify materials within the voxel using a probabilistic approach. The technique reduces the classification artifacts that occur along boundaries between materials. The technique is useful for making higher-quality geometric models and renderings from volume data, and has the potential to make more accurate volume measurements. It also classifies noisy, low-resolution data well.
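
    The shape of such a mixture basis function can be sketched directly: a boundary voxel's values are blurred between the two pure-material means, so its histogram is the average of Gaussians centered along the line joining those means. The means, width, and quadrature resolution below are illustrative assumptions, not the paper's derived parameters.

        import numpy as np

        def pure_basis(v, mean, sigma):
            # Pure-material histogram basis: a normalized Gaussian.
            return np.exp(-0.5 * ((v - mean) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

        def mixture_basis(v, mean_a, mean_b, sigma, steps=200):
            # Average Gaussians centered at every mixture fraction t in [0, 1];
            # the bump flattens into the plateau seen in boundary-voxel histograms.
            t = np.linspace(0.0, 1.0, steps)[:, None]
            centers = (1.0 - t) * mean_a + t * mean_b
            return pure_basis(v[None, :], centers, sigma).mean(axis=0)

        v = np.linspace(0.0, 150.0, 512)
        boundary = mixture_basis(v, mean_a=40.0, mean_b=100.0, sigma=5.0)
        # The resulting basis function is broad and non-Gaussian, spanning
        # the interval between the two pure-material means.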

    Partial-volume Bayesian classification of material mixtures in MR volume data using voxel histograms

    The authors present a new algorithm for identifying the distribution of different material types in volumetric datasets such as those produced with magnetic resonance imaging (MRI) or computed tomography (CT). Because the authors allow for mixtures of materials and treat voxels as regions, their technique reduces errors that other classification techniques can create along boundaries between materials and is particularly useful for creating accurate geometric models and renderings from volume data. It also has the potential to make volume measurements more accurate and classifies noisy, low-resolution data well. There are two unusual aspects to the authors' approach. First, they assume that, due to partial-volume effects, or blurring, voxels can contain more than one material, e.g., both muscle and fat; the authors compute the relative proportion of each material in the voxels. Second, they incorporate information from neighboring voxels into the classification process by reconstructing a continuous function, ρ(x), from the samples and then looking at the distribution of values that ρ(x) takes on within the region of a voxel. This distribution of values is represented by a histogram taken over the region of the voxel; the mixture of materials that those values measure is identified within the voxel using a probabilistic Bayesian approach that matches the histogram by finding the mixture of materials within each voxel most likely to have created the histogram. The size of regions that the authors classify is chosen to match the spacing of the samples because the spacing is intrinsically related to the minimum feature size that the reconstructed continuous function can represent.
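
    As a small illustration of the voxel-as-a-region idea, the sketch below supersamples an assumed reconstruction of ρ(x) inside one boundary voxel, histograms those values, and finds the combination of basis histograms that best explains it. A non-negative least-squares fit stands in here for the authors' full Bayesian fit, and all material parameters are assumptions.

        import numpy as np
        from scipy.optimize import nnls

        bins = np.linspace(0.0, 150.0, 65)
        centers = 0.5 * (bins[:-1] + bins[1:])

        def pure_hist(mean, sigma):
            h = np.exp(-0.5 * ((centers - mean) / sigma) ** 2)
            return h / h.sum()

        def mixed_hist(mean_a, mean_b, sigma, steps=100):
            # Histogram of values blurred between two pure-material means.
            t = np.linspace(0.0, 1.0, steps)[:, None]
            mu = (1.0 - t) * mean_a + t * mean_b
            h = np.exp(-0.5 * ((centers[None, :] - mu) / sigma) ** 2).mean(axis=0)
            return h / h.sum()

        basis = np.column_stack([pure_hist(40.0, 5.0),
                                 pure_hist(100.0, 5.0),
                                 mixed_hist(40.0, 100.0, 5.0)])

        # Values of rho(x) sampled densely inside one boundary voxel
        # (synthetic: a blur between material A and material B, plus noise).
        rng = np.random.default_rng(1)
        t = rng.random(4000)
        samples = (1.0 - t) * 40.0 + t * 100.0 + rng.normal(0.0, 5.0, 4000)
        voxel_hist, _ = np.histogram(samples, bins=bins)
        voxel_hist = voxel_hist / voxel_hist.sum()

        weights, _ = nnls(basis, voxel_hist)
        print(weights / weights.sum())  # most weight lands on the A/B mixture basis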

    09302 Abstracts Collection -- New Developments in the Visualization and Processing of Tensor Fields

    From 19.07. to 24.07.2009, the Dagstuhl Seminar 09302 "New Developments in the Visualization and Processing of Tensor Fields" was held in Schloss Dagstuhl – Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    Pure phase-encoded MRI and classification of solids

    Here, the authors combine a pure phase-encoded magnetic resonance imaging (MRI) method with a new tissue-classification technique to make geometric models of a human tooth. They demonstrate the feasibility of three-dimensional imaging of solids using a conventional 11.7-T NMR spectrometer. In solid-state imaging, confounding line-broadening effects are typically eliminated using coherent averaging methods. Instead, the authors circumvent them by detecting the proton signal at a fixed phase-encode time following the radio-frequency excitation. By a judicious choice of the phase-encode time in the MRI protocol, the authors differentiate enamel and dentine sufficiently to successfully apply a new classification algorithm. This tissue-classification algorithm identifies the distribution of different material types, such as enamel and dentine, in volumetric data. In this algorithm, the authors treat a voxel as a volume, not as a single point, and assume that each voxel may contain more than one material. They use the distribution of MR image intensities within each voxel-sized volume to estimate the relative proportion of each material using a probabilistic approach. This combined approach, involving MRI and data classification, is directly applicable to bone imaging and hard-tissue contrast-based modeling of biological solids.
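
    The effect of the phase-encode time on tissue contrast can be illustrated with a toy single-exponential decay model; the proton densities and T2* values below are invented for illustration and are not measured tooth parameters.

        import numpy as np

        def signal(t_us, density, t2_star_us):
            # FID amplitude at a fixed phase-encode time after excitation.
            return density * np.exp(-t_us / t2_star_us)

        t = np.linspace(10.0, 500.0, 1000)                  # candidate encode times (us)
        enamel = signal(t, density=1.0, t2_star_us=70.0)    # assumed parameters
        dentine = signal(t, density=0.8, t2_star_us=200.0)  # assumed parameters
        best = t[np.argmax(np.abs(enamel - dentine))]
        print(f"encode time with maximal enamel/dentine contrast: {best:.0f} us")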

    De-aliasing Undersampled Volume Images for Visualization

    We present and illustrate a new technique, Image Correlation Supersampling (ICS), for resampling volume data that are undersampled in one dimension. The resulting data satisfy the sampling theorem, and therefore many visualization algorithms that assume the theorem is satisfied can be applied to the data. Without the supersampling, the visualization algorithms create artifacts due to aliasing. The assumptions made in developing the algorithm are often satisfied by data that is undersampled temporally. Through this supersampling we can completely characterize phenomena with measurements at a coarser temporal sampling rate than would otherwise be necessary. This can save acquisition time and storage space, permit the study of faster phenomena, and allow their study without introducing aliasing artifacts. The resampling technique relies on a priori knowledge of the measured phenomenon, and applies, in particular, to scalar concentration measurements of fluid flow. Because of the characteristics of fluid flow, an image deformation that takes each slice image to the next can be used to calculate intermediate slice images at arbitrarily fine spacing. We determine the deformation with an automatic, multi-resolution algorithm.
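
    A minimal sketch of the interpolation step follows, assuming the registration stage has already produced the deformation between adjacent slices (here a hand-specified translation field): each neighboring slice is warped partway along the deformation and the two partial warps are blended into an intermediate slice.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def warp(image, flow_y, flow_x, scale):
            # Backward warp: sample the image at position x + scale * flow(x).
            yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]].astype(float)
            coords = np.array([yy + scale * flow_y, xx + scale * flow_x])
            return map_coordinates(image, coords, order=1, mode='nearest')

        def intermediate_slice(slice_a, slice_b, flow_y, flow_x, t):
            # flow is defined so that warping slice_a with scale=1 gives slice_b.
            a_part = warp(slice_a, flow_y, flow_x, t)
            b_part = warp(slice_b, flow_y, flow_x, t - 1.0)
            return (1.0 - t) * a_part + t * b_part

        # Synthetic example: a blob that translates 4 pixels between slices.
        yy, xx = np.mgrid[0:64, 0:64]
        slice_a = np.exp(-((yy - 30) ** 2 + (xx - 30) ** 2) / 40.0)
        slice_b = np.exp(-((yy - 30) ** 2 + (xx - 34) ** 2) / 40.0)
        flow_y = np.zeros((64, 64))
        flow_x = np.full((64, 64), -4.0)  # sampling offset taking slice_a onto slice_b
        mid = intermediate_slice(slice_a, slice_b, flow_y, flow_x, 0.5)
        print(np.unravel_index(mid.argmax(), mid.shape))  # blob midway, near (30, 32)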

    Visualizing Diffusion Tensor Images of the Mouse Spinal Cord

    Within biological systems, water molecules undergo continuous stochastic Brownian motion. The rate of this diffusion can give clues to the structure of underlying tissues. In some tissues the rate is anisotropic: faster in some directions than others. Diffusion-rate images are second-order tensor fields and can be calculated from diffusion-weighted magnetic resonance images. A 2D diffusion tensor image (DTI) and an associated anatomical scalar field, created during the tensor calculation, define seven dependent values at each spatial location. Understanding the interrelationships among these values is necessary to understand the data. We present two new methods for visually representing DTIs. The first method displays an array of ellipsoids, where the shape of each ellipsoid represents one tensor value. The novel aspect of this representation is that the ellipsoids are all normalized to approximately the same size so that they can be displayed in context. The second method uses concepts from oil painting to represent the seven-valued data with multiple layers of varying brush strokes. Both methods successfully display most or all of the information in DTIs and provide exploratory methods for understanding them. The ellipsoid method has a simpler interpretation and explanation than the painting-motivated method; the painting-motivated method displays more of the information and is easier to read quantitatively. We demonstrate the methods on images of the mouse spinal cord. The visualizations show significant differences between spinal cords from mice suffering from Experimental Allergic Encephalomyelitis (EAE) and spinal cords from wild-type mice. The differences are consistent with pathology differences shown histologically and suggest that our new non-invasive imaging methodology and visualization of the results could have early diagnostic value for neurodegenerative diseases.
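
    The normalization in the ellipsoid method can be sketched briefly: each tensor's eigenvectors give the glyph's axes and its eigenvalues their lengths, and dividing by the largest eigenvalue preserves the shape (the anisotropy) while bringing every glyph to roughly the same size. The sample tensor below is an illustrative assumption.

        import numpy as np

        def normalized_ellipsoid(tensor, eps=1e-12):
            # Symmetric diffusion tensor -> unit-size ellipsoid radii and axes.
            eigenvalues, eigenvectors = np.linalg.eigh(tensor)
            radii = eigenvalues / max(eigenvalues.max(), eps)  # largest radius = 1
            return radii, eigenvectors  # eigenvector columns are axis directions

        # Strongly anisotropic tensor, e.g. diffusion along a fiber (assumed values).
        D = np.diag([1.6e-3, 0.3e-3, 0.3e-3])  # mm^2/s
        radii, axes = normalized_ellipsoid(D)
        print(radii)  # [0.1875 0.1875 1.    ] after eigh's ascending ordering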